Search for: All records
Total Resources: 2
- Author / Contributor
  - Freiburger, Erin (2)
  - Hugenberg, Kurt (2)
  - Sim, Mattea (2)
  - Hutchings, Ryan (1)
  - Hutchings, Ryan J. (1)
  - Kohno, Tadayoshi (1)
  - Owens, Kentrell (1)
  - Roesner, Franziska (1)
-
We applied techniques from psychology (typically used to visualize human bias) to facial analysis systems, providing novel approaches for diagnosing and communicating algorithmic bias. First, we aggregated a diverse corpus of human facial images (N = 1492) with self-identified gender and race. We tested four automated gender recognition (AGR) systems and found that some exhibited intersectional gender-by-race biases. Employing face averaging, a technique developed by psychologists, we created composite images to visualize these systems' outputs. For example, we visualized what an average woman looks like according to a system's output. Second, we conducted two online experiments wherein participants judged the bias of hypothetical AGR systems. The first experiment involved participants (N = 228) from a convenience sample. When the same results were depicted in different formats, facial visualizations communicated bias to the same magnitude as statistics. In the second experiment, with only Black participants (N = 223), facial visualizations communicated bias significantly more than statistics, suggesting that face averages are meaningful for communicating algorithmic bias.
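To make the face-averaging idea concrete, here is a minimal sketch, assuming pre-aligned, equally sized face images and a hypothetical classify(path) callable standing in for an AGR system; this is an illustration of the general technique, not the paper's actual pipeline.

```python
# Sketch: composite the faces an AGR system assigns a given label,
# split by self-identified race, to visualize the system's output.
# Assumes images are pre-aligned and equally sized; `classify` is a
# hypothetical stand-in for a real AGR system's prediction call.
import numpy as np
from PIL import Image

def face_average(paths):
    """Pixel-wise mean of pre-aligned face images -> one composite face."""
    stack = np.stack([
        np.asarray(Image.open(p).convert("RGB"), dtype=np.float64)
        for p in paths
    ])
    return Image.fromarray(stack.mean(axis=0).astype(np.uint8))

def composite_by_prediction(records, classify, label="woman"):
    """Average all faces the system labels `label`, grouped by race.
    Each record is a dict like {"path": ..., "race": ...}."""
    groups = {}
    for rec in records:
        if classify(rec["path"]) == label:
            groups.setdefault(rec["race"], []).append(rec["path"])
    return {race: face_average(paths) for race, paths in groups.items()}
```

Comparing the composites across race groups (e.g., the "average woman" per group, according to the system) is what makes subgroup-level error patterns visible as images rather than as accuracy tables.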
-
Hutchings, Ryan J.; Freiburger, Erin; Sim, Mattea; Hugenberg, Kurt (Psychological Science). What makes faces seem trustworthy? We investigated how racial prejudice predicts the extent to which perceivers employ racially prototypical cues to infer trustworthiness from faces. We constructed participant-level computational models of trustworthiness and White-to-Black prototypicality from U.S. college students' judgments of White (Study 1, N = 206) and Black–White morphed (Study 3, N = 386) synthetic faces. Although the average relationships between models differed across stimuli, both studies revealed that as participants' anti-Black prejudice increased and/or intergroup contact decreased, so too did participants' tendency to conflate White prototypical features with trustworthiness and Black prototypical features with untrustworthiness. Study 2 (N = 324) and Study 4 (N = 397) corroborated that untrustworthy faces constructed from participants with pro-White preferences appeared more Black prototypical to naive U.S. adults, relative to untrustworthy faces modeled from other participants. This work highlights the important role of racial biases in shaping impressions of facial trustworthiness.
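A rough sketch of the participant-level modeling logic, assuming each synthetic face is described by a numeric feature vector (a common setup in face-space designs); the least-squares fit and the cosine-similarity comparison below are illustrative assumptions, not the authors' published analysis code.

```python
# Sketch: fit one linear model per participant over a parameterized face
# space, then compare that participant's two models. Feature vectors and
# least-squares fitting are assumptions for illustration only.
import numpy as np

def fit_participant_model(face_features, ratings):
    """Least-squares weights mapping face-space dimensions to one
    participant's judgments (trustworthiness, or White-to-Black
    prototypicality)."""
    X = np.column_stack([np.ones(len(face_features)),
                         np.asarray(face_features, dtype=float)])
    coefs, *_ = np.linalg.lstsq(X, np.asarray(ratings, dtype=float),
                                rcond=None)
    return coefs[1:]  # drop the intercept; keep per-dimension weights

def model_alignment(trust_weights, proto_weights):
    """Cosine similarity between a participant's trustworthiness and
    prototypicality models; its sign and magnitude index how strongly
    racially prototypical cues pattern with (un)trustworthiness."""
    return float(trust_weights @ proto_weights /
                 (np.linalg.norm(trust_weights)
                  * np.linalg.norm(proto_weights)))
```

Under this framing, the studies' key individual-difference question becomes whether the per-participant alignment score covaries with measured prejudice and intergroup contact.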
